Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38526881

ABSTRACT

Accurately diagnosing chronic kidney disease requires pathologists to assess the structure of multiple tissues under different stains, a process that is time-consuming and labor-intensive. Current AI-based methods for automatic structure assessment, such as segmentation, often demand extensive manual annotation and focus on a single stain domain. To address these challenges, we introduce MSMTSeg, a generative self-supervised meta-learning framework for multi-stained multi-tissue segmentation in renal biopsy whole slide images (WSIs). MSMTSeg incorporates multiple stain transform models for style translation across stain domains, a self-supervision module for obtaining pre-trained models with domain-specific feature representations, and a meta-learning strategy that leverages the generated virtual data and pre-trained models to learn a domain-invariant feature representation across multiple stains, thereby enhancing segmentation performance. Experimental results demonstrate that MSMTSeg achieves superior and robust performance, with an mDSC of 0.836 and an mIoU of 0.718 for multiple tissues under different stains, using only one annotated training sample per stain. Our ablation study confirms the effectiveness of each component, positioning MSMTSeg ahead of classic advanced segmentation networks, recent few-shot segmentation methods, and unsupervised domain adaptation methods. In conclusion, our proposed few-shot cross-domain technology offers a feasible and cost-effective solution for multi-stained renal histology segmentation, providing convenient assistance to pathologists in clinical practice. The source code and conditionally accessible data are available at https://github.com/SnowRain510/MSMTSeg.
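The mDSC and mIoU figures quoted above are the mean Dice similarity coefficient and mean intersection-over-union, the standard overlap metrics for segmentation. A minimal sketch of the two metrics on a single pair of flat binary masks (an illustration of the metric definitions, not the authors' code):

```python
# Sketch of the two segmentation metrics reported by MSMTSeg (DSC and IoU),
# computed for one pair of flat binary masks; the paper's mDSC/mIoU average
# these over tissues and stains.

def dice(pred, target):
    """Dice similarity coefficient: 2|P∩T| / (|P| + |T|)."""
    inter = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

def iou(pred, target):
    """Intersection over union: |P∩T| / |P∪T|."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0

pred   = [1, 1, 0, 0]   # predicted mask, flattened
target = [1, 0, 1, 0]   # ground-truth mask, flattened
print(dice(pred, target))  # → 0.5
print(iou(pred, target))   # → 1/3 ≈ 0.3333
```

Note that Dice is always at least as large as IoU for the same masks, which is consistent with the reported mDSC (0.836) exceeding the reported mIoU (0.718).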

2.
J Xray Sci Technol ; 32(2): 323-338, 2024.
Article in English | MEDLINE | ID: mdl-38306087

ABSTRACT

BACKGROUND: Interstitial lung disease (ILD) represents a group of chronic heterogeneous diseases, and current clinical practice in assessing ILD severity and progression relies mainly on radiologist-based visual screening, which greatly restricts the accuracy of disease assessment due to high inter- and intra-observer variability. OBJECTIVE: To address these problems, we propose a deep-learning-driven framework that assesses and quantifies lesion indicators and outputs a prediction of ILD severity. METHODS: We first present a convolutional neural network that segments and quantifies five types of lesions, including HC, RO, GGO, CONS, and EMPH, from HRCT scans of ILD patients, and we then conduct quantitative analysis to select ILD-related features based on the segmented lesions and clinical data. Finally, a multivariate nomogram-based prediction model for ILD severity is established by combining multiple typical lesions. RESULTS: Experimental results showed that three lesion types, HC, RO, and GGO, could accurately predict ILD staging, either independently or combined with other HRCT features. Based on the HRCT features, the multivariate model achieved the highest AUC of 0.755 (HC) and the lowest of 0.701 (RO) in stage I, and the highest AUC of 0.803 (HC) and the lowest of 0.733 (RO) in stage II. Additionally, our ILD scoring model achieved an average accuracy of 0.812 (0.736 - 0.888) in predicting ILD severity via cross-validation. CONCLUSIONS: In summary, our proposed method provides effective segmentation of ILD lesions through a comprehensive deep-learning approach and confirms its potential to improve diagnostic accuracy for clinicians.
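The AUC values reported in the RESULTS section are areas under the ROC curve for each lesion feature as a stage predictor. A hedged illustration of the metric itself (not the authors' pipeline): AUC can be computed via the rank-based Mann-Whitney U formulation as the fraction of positive/negative score pairs that are ranked correctly.

```python
# Illustration of ROC AUC via pairwise ranking (Mann-Whitney U formulation);
# scores could be, e.g., a lesion-extent feature, labels the binary stage.
# This is a generic sketch, not code from the paper.

def auc(scores, labels):
    """ROC AUC: fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct. Labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of two positives from two negatives gives AUC 1.0;
# a random-like interleaving gives 0.5.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 0, 1]))  # → 0.5
```

An AUC of 0.755 for HC in stage I thus means a randomly chosen stage-I patient receives a higher HC-based score than a randomly chosen non-stage-I patient about 75.5% of the time.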


Subject(s)
Deep Learning , Lung Diseases, Interstitial , Humans , Tomography, X-Ray Computed/methods , Lung Diseases, Interstitial/diagnostic imaging , Lung/diagnostic imaging , Lung/pathology , Retrospective Studies